17 research outputs found

    Analysing correlated noise on the surface code using adaptive decoding algorithms

    Laboratory hardware is rapidly progressing towards a state where quantum error-correcting codes can be realised. As such, we must learn how to deal with the complex nature of the noise that may occur in real physical systems. Single-qubit Pauli errors are commonly used to study the behaviour of error-correcting codes, but in general we might expect the environment to introduce correlated errors to a system. Given some knowledge of the structures that errors commonly take, it may be possible to adapt the error-correction procedure to compensate for this noise, but performing full state tomography on a physical system to analyse this structure quickly becomes impossible as the size increases beyond a few qubits. Here we develop and test new methods to analyse a particular class of spatially correlated errors by making use of parametrised families of decoding algorithms. We demonstrate our method numerically using a diffusive noise model. We show that information can be learnt about the parameters of the noise model, and additionally that the logical error rates can be improved. We conclude by discussing how our method could be utilised in a practical setting and propose extensions of our work to study more general error models.
    Comment: 19 pages, 8 figures, comments welcome; v2 - minor typos corrected, some references added; v3 - accepted to Quantum
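    The idea of a parametrised decoder family can be illustrated with a toy sketch (a hypothetical model, not the authors' implementation): decoder-graph edge weights are derived from an effective error probability that depends on a correlation parameter eta, and scanning eta for the value that minimises the observed logical error rate yields information about the noise.

    ```python
    import math

    def edge_weight(p, eta):
        """Weight for a decoder-graph edge when the single-qubit error
        probability p is enhanced by a correlation parameter eta
        (eta = 0 recovers independent Pauli noise). Hypothetical model."""
        p_eff = p * (1 + eta)
        return -math.log(p_eff / (1 - p_eff))

    # Each eta defines one decoder in the parametrised family; in practice
    # one would run each decoder on syndrome data and keep the eta that
    # gives the lowest logical failure rate.
    weights = [edge_weight(0.01, eta) for eta in (0.0, 0.5, 1.0)]
    ```

    Stronger assumed correlations (larger eta) raise the effective error probability on an edge and therefore lower its matching weight, biasing the decoder towards correlated error patterns.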

    Fault-tolerant error correction with the gauge color code

    The constituent parts of a quantum computer are inherently vulnerable to errors. To address this, quantum error-correcting codes have been developed to protect quantum information from noise. However, discovering codes that are capable of a universal set of computational operations at minimal cost in quantum resources remains an important and ongoing challenge. One proposal of significant recent interest is the gauge color code. Notably, this code may offer a reduced resource cost over other well-studied fault-tolerant architectures using a new method, known as gauge fixing, for performing the non-Clifford logical operations that are essential for universal quantum computation. Here we examine the gauge color code when it is subject to noise. Specifically, we make use of single-shot error correction to develop a simple decoding algorithm for the gauge color code, and we numerically analyse its performance. Remarkably, we find threshold error rates comparable to those of other leading proposals. Our results thus provide encouraging preliminary data for a comparative study between the gauge color code and other promising computational architectures.
    Comment: v1 - 5+4 pages, 11 figures, comments welcome; v2 - minor revisions, new supplemental including a discussion on correlated errors and details on threshold calculations; v3 - Author accepted manuscript. Accepted on 21/06/16. Deposited on 29/07/16. 9+5 pages, 17 figures, new version includes resource scaling analysis in below-threshold regime, see eqn. (4) and methods section

    Freely Scalable Quantum Technologies using Cells of 5-to-50 Qubits with Very Lossy and Noisy Photonic Links

    Exquisite quantum control has now been achieved in small ion traps, in nitrogen-vacancy centres and in superconducting qubit clusters. We can regard such a system as a universal cell with diverse technological uses from communication to large-scale computing, provided that the cell is able to network with others and overcome any noise in the interlinks. Here we show that loss-tolerant entanglement purification makes quantum computing feasible with the noisy and lossy links that are realistic today: with a modestly complex cell design, and using a surface code protocol with a network noise threshold of 13.3%, we find that interlinks which attempt entanglement at a rate of 2 MHz but suffer 98% photon loss can result in kilohertz computer clock speeds (i.e. rate of high-fidelity stabilizer measurements). Improved links would dramatically increase the clock speed. Our simulations employed local gates of a fidelity already achieved in ion trap devices.
    Comment: corrected typos, additional references, additional figures
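    The headline numbers can be sanity-checked with back-of-envelope arithmetic. The purification overhead of ten raw Bell pairs per high-fidelity pair used below is an illustrative assumption, not a figure from the paper:

    ```python
    # Figures quoted in the abstract
    attempt_rate_hz = 2e6      # entanglement attempts per second (2 MHz)
    photon_loss = 0.98         # 98% of photons lost in the interlink

    # Raw Bell-pair rate surviving the lossy link
    raw_pair_rate = attempt_rate_hz * (1 - photon_loss)   # 40 kHz

    # Assumed (illustrative) purification cost: ~10 raw pairs consumed
    # per high-fidelity pair used in a stabilizer measurement
    pairs_per_purified = 10
    purified_rate = raw_pair_rate / pairs_per_purified    # ~4 kHz
    ```

    Even with 98% loss, the 2 MHz attempt rate leaves tens of kilohertz of raw pairs, so a modest purification overhead still lands in the kilohertz clock-speed regime the abstract reports.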

    Sequencing of prostate cancers identifies new cancer genes, routes of progression and drug targets

    Prostate cancer represents a substantial clinical challenge because it is difficult to predict outcome and advanced disease is often fatal. We sequenced the whole genomes of 112 primary and metastatic prostate cancer samples. From joint analysis of these cancers with those from previous studies (930 cancers in total), we found evidence for 22 previously unidentified putative driver genes harboring coding mutations, as well as evidence for NEAT1 and FOXA1 acting as drivers through noncoding mutations. Through the temporal dissection of aberrations, we identified driver mutations specifically associated with steps in the progression of prostate cancer, establishing, for example, loss of CHD1 and BRCA2 as early events in the development of ETS fusion-negative cancers. Computational chemogenomic (canSAR) analysis of prostate cancer mutations identified 11 targets of approved drugs, 7 targets of investigational drugs, and 62 targets of compounds that may be active and should be considered candidates for future clinical trials.

    Pharmacogenetic meta-analysis of genome-wide association studies of LDL cholesterol response to statins

    Statins effectively lower LDL cholesterol levels in large studies, and the observed interindividual response variability may be partially explained by genetic variation. Here we perform a pharmacogenetic meta-analysis of genome-wide association studies (GWAS) in studies addressing the LDL cholesterol response to statins, including up to 18,596 statin-treated subjects. We validate the most promising signals in a further 22,318 statin recipients and identify two loci, SORT1/CELSR2/PSRC1 and SLCO1B1, not previously identified in GWAS. Moreover, we confirm the previously described associations with APOE and LPA. Our findings advance the understanding of the pharmacogenetic architecture of statin response.
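    A GWAS meta-analysis of this kind typically pools per-study effect estimates by inverse-variance weighting. A minimal fixed-effect sketch (the function name and example numbers are illustrative, not data from the study):

    ```python
    def fixed_effect_meta(betas, ses):
        """Inverse-variance-weighted fixed-effect meta-analysis of
        per-study effect sizes (betas) with standard errors (ses)."""
        weights = [1.0 / se ** 2 for se in ses]
        pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
        pooled_se = (1.0 / sum(weights)) ** 0.5
        return pooled, pooled_se

    # Two hypothetical studies of the same variant's effect on LDL response
    pooled_beta, pooled_se = fixed_effect_meta([0.10, 0.14], [0.02, 0.04])
    ```

    The more precise study (smaller standard error) dominates the pooled estimate, which is why the combined discovery sample of 18,596 subjects can surface signals no single contributing cohort detects.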

    Clinical Sequencing Exploratory Research Consortium: Accelerating Evidence-Based Practice of Genomic Medicine

    Despite rapid technical progress and demonstrable effectiveness for some types of diagnosis and therapy, much remains to be learned about clinical genome and exome sequencing (CGES) and its role within the practice of medicine. The Clinical Sequencing Exploratory Research (CSER) consortium includes 18 extramural research projects, one National Human Genome Research Institute (NHGRI) intramural project, and a coordinating center funded by the NHGRI and National Cancer Institute. The consortium is exploring analytic and clinical validity and utility, as well as the ethical, legal, and social implications of sequencing via multidisciplinary approaches; it has thus far recruited 5,577 participants across a spectrum of symptomatic and healthy children and adults by utilizing both germline and cancer sequencing. The CSER consortium is analyzing data and creating publicly available procedures and tools related to participant preferences and consent, variant classification, disclosure and management of primary and secondary findings, health outcomes, and integration with electronic health records. Future research directions will refine measures of clinical utility of CGES in both germline and somatic testing, evaluate the use of CGES for screening in healthy individuals, explore the penetrance of pathogenic variants through extensive phenotyping, reduce discordances in public databases of genes and variants, examine social and ethnic disparities in the provision of genomics services, explore regulatory issues, and estimate the value and downstream costs of sequencing. The CSER consortium has established a shared community of research sites by using diverse approaches to pursue the evidence-based development of best practices in genomic medicine.

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Coherence Properties of Shallow Donor Qubits in Zn
